
    Inference on a Distribution from Noisy Draws

    We consider a situation where the distribution of a random variable is being estimated by the empirical distribution of noisy measurements of that variable. This is common practice in, for example, teacher value-added models and other fixed-effect models for panel data. We use an asymptotic embedding where the noise shrinks with the sample size to calculate the leading bias in the empirical distribution arising from the presence of noise. The leading bias in the empirical quantile function is obtained as well. These calculations are new to the literature, where only results on smooth functionals such as the mean and variance have been derived. Given a closed-form expression for the bias, bias-corrected estimators of the distribution function and the quantile function can be constructed. We provide both analytical and jackknife corrections that recenter the limit distribution and yield confidence intervals with correct coverage in large samples. These corrections are non-parametric and easy to implement. Our approach can be connected to corrections for selection bias and shrinkage estimation and is to be contrasted with deconvolution. Simulation results confirm the much-improved sampling behavior of the corrected estimators. Comment: 24 pages main text, 22 pages appendix (including references).
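    As a rough illustration of how a jackknife correction of this kind can be implemented, the sketch below assumes each latent draw is estimated by a time average of noisy measurements and that half-sample estimates carry roughly twice the leading bias of full-sample ones, so that twice the full-sample ECDF minus the average half-sample ECDF removes the leading term. The splitting rule and function names are assumptions made for illustration, not the paper's exact construction.

```python
import numpy as np

def ecdf(values, grid):
    """Empirical CDF of `values` evaluated at each point of `grid`."""
    return (values[:, None] <= grid[None, :]).mean(axis=0)

def jackknife_corrected_ecdf(panel, grid):
    """Half-sample jackknife correction of the ECDF of unit-level averages.

    `panel` is an (n, T) array of noisy measurements.  The noise variance of a
    time average is O(1/T), and the leading bias of the naive ECDF is
    proportional to that variance, so half-sample averages (variance O(2/T))
    carry twice the bias and 2*F_full - mean(F_half) removes it to first order.
    """
    n, T = panel.shape
    full = panel.mean(axis=1)
    half1 = panel[:, : T // 2].mean(axis=1)
    half2 = panel[:, T // 2 :].mean(axis=1)
    f_full = ecdf(full, grid)
    f_half = 0.5 * (ecdf(half1, grid) + ecdf(half2, grid))
    return 2.0 * f_full - f_half

# Illustration on simulated data: latent draws from a standard normal.
rng = np.random.default_rng(0)
theta = rng.normal(size=500)
panel = theta[:, None] + rng.normal(size=(500, 20))
grid = np.linspace(-3, 3, 61)
f_naive = ecdf(panel.mean(axis=1), grid)
f_jack = jackknife_corrected_ecdf(panel, grid)
```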

    Split-panel jackknife estimation of fixed-effect models

    We propose a jackknife for reducing the order of the bias of maximum likelihood estimates of nonlinear dynamic fixed-effect panel models. In its simplest form, the half-panel jackknife, the estimator is just 2θ̂ − θ̄1/2, where θ̂ is the MLE from the full panel and θ̄1/2 is the average of the two half-panel MLEs, each using T/2 time periods and all N cross-sectional units. This estimator eliminates the first-order bias of θ̂. The order of the bias is further reduced if two partitions of the panel are used, for example, two half-panels and three 1/3-panels, and the corresponding MLEs. On further partitioning the panel, any order of bias reduction can be achieved. The split-panel jackknife estimators are asymptotically normal, centered at the true value, with variance equal to that of the MLE under asymptotics where T is allowed to grow slowly with N. In analogous fashion, the split-panel jackknife reduces the bias of the profile likelihood and the bias of marginal-effect estimates. Simulations in fixed-effect dynamic discrete-choice models with small T show that the split-panel jackknife effectively reduces the bias and mean squared error of the MLE, and yields confidence intervals with much better coverage. Keywords: jackknife, asymptotic bias correction, dynamic panel data, fixed effects
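    A minimal sketch of the half-panel construction described above, in Python; the callable `mle` stands in for whatever routine computes the fixed-effect MLE on a given sub-panel and is an assumption of this sketch, not a function from the paper.

```python
import numpy as np

def half_panel_jackknife(panel, mle):
    """Half-panel jackknife: 2*theta_hat - (theta_hat_a + theta_hat_b) / 2.

    `panel` is an array whose second axis indexes the T time periods (all N
    units are kept in both halves); `mle` maps a sub-panel to a parameter
    estimate (scalar or ndarray).  The combination removes the leading O(1/T)
    bias of the full-panel MLE.
    """
    T = panel.shape[1]
    theta_full = mle(panel)                 # MLE on the full panel
    theta_a = mle(panel[:, : T // 2])       # MLE on the first half-panel
    theta_b = mle(panel[:, T // 2 :])       # MLE on the second half-panel
    return 2.0 * theta_full - 0.5 * (theta_a + theta_b)
```

    In practice `mle` would be, for instance, the fixed-effect ML routine for a dynamic logit; the jackknife itself only requires re-running that routine on the two half-panels.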

    First-differencing in panel data models with incidental functions

    I discuss the fixed-effect estimation of panel data models with time-varying excess heterogeneity across cross-sectional units. These latent components are not given a parametric form. A modification to traditional first-differencing is motivated which, asymptotically, removes the permanent unobserved heterogeneity from the differenced model. Conventional estimation techniques can then be readily applied. Distribution theory for a kernel-weighted GMM estimator under large-n and fixed-T asymptotics is developed. The estimator is put to work in a series of numerical experiments involving static and dynamic models.

    Modified-likelihood estimation of the β-model

    We consider point estimation and inference based on modifications of the profile likelihood in models for dyadic interactions between agents featuring n agent-specific parameters. This setup covers the β-model of network formation and generalizations thereof. The maximum-likelihood estimator of such models has bias and standard deviation of O(n⁻¹) and so is asymptotically biased. Estimation based on modified likelihoods leads to estimators that are asymptotically unbiased and likelihood-ratio tests that exhibit correct size. We apply the modifications to versions of the β-model for network formation and of the Bradley-Terry model for paired comparisons.
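    For concreteness, here is a sketch of the plain (unmodified) maximum-likelihood fit of the β-model via the fixed-point iteration of Chatterjee, Diaconis and Sly (2011); this is the estimator whose bias the modified likelihood is meant to remove, not the modification itself, and it assumes the MLE exists (in particular, every node has degree strictly between 0 and n − 1).

```python
import numpy as np

def beta_model_mle(adj, n_iter=200):
    """MLE of the beta-model: P(A_ij = 1) = exp(b_i + b_j) / (1 + exp(b_i + b_j)).

    Solves the likelihood equations d_i = sum_{j != i} p_ij by the fixed-point
    iteration b_i <- log(d_i) - log(sum_{j != i} exp(b_j) / (1 + exp(b_i + b_j))).
    """
    deg = adj.sum(axis=1).astype(float)            # observed degrees
    b = np.log(deg + 1.0) - np.log(adj.shape[0])   # crude starting values
    for _ in range(n_iter):
        eb = np.exp(b)
        denom = 1.0 + eb[:, None] * eb[None, :]    # 1 + exp(b_i + b_j)
        ratios = eb[None, :] / denom               # exp(b_j) / (1 + exp(b_i + b_j))
        np.fill_diagonal(ratios, 0.0)              # exclude self-links
        b = np.log(deg) - np.log(ratios.sum(axis=1))
    return b

# Usage on a simulated network (illustrative parameter values).
rng = np.random.default_rng(0)
b_true = rng.normal(scale=0.5, size=50)
p = 1.0 / (1.0 + np.exp(-(b_true[:, None] + b_true[None, :])))
upper = np.triu(rng.random((50, 50)) < p, 1)
adj = (upper | upper.T).astype(float)
b_hat = beta_model_mle(adj)
```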

    Profile-score Adjustments for Nonlinear Fixed-effect Models

    Maximum-likelihood estimates of nonlinear panel data models with fixed effects are generally not consistent as the number of units, N, grows large while the number of time periods, T, stays fixed. The inconsistency can be viewed as a consequence of the bias of the score function, where the unit-specific parameters have been profiled out. We investigate ways of adjusting the profile score so as to make it unbiased or approximately unbiased. This leads to estimators, solving an adjusted profile score equation, that are fixed-T consistent or have less asymptotic bias, as T → ∞, than maximum likelihood. One approach to adjusting the profile score is to subtract its bias, evaluated at maximum-likelihood estimates of the fixed effects. When this bias does not depend on the incidental parameters, the adjustment is exact. Otherwise, it does not eliminate the bias entirely but reduces its order (in T), and it can be iterated, reducing the bias order further. We examine a range of nonlinear models with additive fixed effects. In many of these, an exact bias adjustment of the profile score is possible. In others, suitably adjusted profile scores exhibit much less bias than without the adjustment, even for very small T.
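    A textbook case in which the adjustment is exact is the classical many-means variance problem, used below purely as an assumed illustration (not necessarily one of the models examined in the paper): with y_it ~ N(α_i, σ²), the profile score for σ² has expectation −N/(2σ²), which is free of the α_i, so subtracting this bias yields an exactly unbiased estimating equation whose root is the fixed-T-consistent variance estimator.

```python
import numpy as np

# y_it ~ N(a_i, s2): the profiled (within) log-likelihood for s2 has score
#   -N*T/(2*s2) + RSS/(2*s2**2),  with E[RSS] = N*(T - 1)*s2,
# so its bias at the truth is -N/(2*s2).  Adding N/(2*s2) and solving gives
# RSS / (N*(T - 1)), the unbiased, fixed-T-consistent estimator.
rng = np.random.default_rng(0)
N, T, s2 = 2000, 4, 1.0
a = 3.0 * rng.normal(size=(N, 1))                              # fixed effects
y = a + rng.normal(scale=np.sqrt(s2), size=(N, T))
rss = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum()

s2_mle = rss / (N * T)          # root of the unadjusted profile score; biased
s2_adj = rss / (N * (T - 1))    # root of the bias-adjusted profile score
print(s2_mle, s2_adj)           # roughly 0.75 versus 1.00 when s2 = 1, T = 4
```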

    Likelihood inference in an Autoregression with fixed effects

    We calculate the bias of the profile score for the regression coefficients in a multistratum autoregressive model with stratum-specific intercepts. The bias is free of incidental parameters. Centering the profile score delivers an unbiased estimating equation and, upon integration, an adjusted profile likelihood. A variety of other approaches to constructing modified profile likelihoods are shown to yield equivalent results. However, the global maximizer of the adjusted likelihood lies at infinity for any sample size, and the adjusted profile score has multiple zeros. We therefore argue for estimating the parameters by a local maximizer of the adjusted likelihood lying inside or on an ellipsoid centered at the maximum-likelihood estimator.
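    To make the underlying problem concrete, the following small simulation (with an assumed design, not taken from the paper) shows the well-known downward bias of the unadjusted profile-ML (within) estimator of the autoregressive coefficient when T is small; this is the bias that centering the profile score is designed to remove.

```python
import numpy as np

# Panel AR(1) with unit-specific intercepts: y_it = a_i*(1 - rho) + rho*y_i,t-1 + e_it.
rng = np.random.default_rng(1)
N, T, rho = 5000, 6, 0.5
a = rng.normal(size=N)
y = np.zeros((N, T + 1))
for t in range(1, T + 1):
    y[:, t] = a * (1.0 - rho) + rho * y[:, t - 1] + rng.normal(size=N)

# Within (profile-ML) estimator: demean lagged and current values by unit.
y_lag, y_cur = y[:, :-1], y[:, 1:]
y_lag_w = y_lag - y_lag.mean(axis=1, keepdims=True)
y_cur_w = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_within = (y_lag_w * y_cur_w).sum() / (y_lag_w ** 2).sum()
print(rho_within)   # noticeably below the true value 0.5 for T = 6
```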

    Testing Random Assignment To Peer Groups

    Identification of peer effects is complicated by the fact that the individuals under study may self-select their peers. Random assignment to peer groups has proven useful to sidestep such a concern. In the absence of a formal randomization mechanism it needs to be argued that assignment is `as good as' random. This paper introduces a simple yet powerful test to do so. We provide theoretical results for this test. As a by-product we also obtain such results for an approach popularized by Guryan, Kroft and Notowidigdo (2009). These results help to explain why this approach suffers from low power, as has been observed elsewhere. Our approach can also be used, without modification, to test for the presence of peer effects in the linear-in-means model.